    Power matters: Foucault's pouvoir/savoir as a conceptual lens in information research and practice

    © the author, 2015. Introduction. This paper advocates Foucault's notion of pouvoir/savoir (power/knowledge) as a conceptual lens that information researchers might fruitfully use to develop a richer understanding of the relationship between knowledge and power. Methods. Three of the authors' earlier studies are employed to illustrate the use of this conceptual lens. Methodologically, the studies are closely related: they adopted a qualitative research design and made use of semi-structured and/or conversational, in-depth interviews as their primary method of data collection. The data were analysed using an inductive, discourse analytic approach. Analysis. The paper provides a brief introduction to Foucault's concept before examining the information practices of academic, professional and artistic communities. Through concrete empirical examples, the authors aim to demonstrate how a Foucauldian lens will provide a more in-depth understanding of how particular information practices exert authority in a discourse community while other such practices may be construed as ineffectual. Conclusion. The paper offers a radically different conceptual lens through which researchers can study information practices, not in individual or acultural terms but as a social construct, both a product and a generator of power/knowledge.

    Modelling of gas dynamical properties of the KATRIN tritium source and implications for the neutrino mass measurement

    The KATRIN experiment aims to measure the effective mass of the electron antineutrino with a sensitivity of 200 meV from the analysis of electron spectra stemming from the beta-decay of molecular tritium. To this end, a daily throughput of about 40 g of gaseous tritium is circulated in a windowless source section. An accurate description of the gas flow through this section is of fundamental importance for the neutrino mass measurement, as it significantly influences the generation and transport of beta-decay electrons through the experimental setup. In this paper we present a comprehensive model consisting of calculations of rarefied gas flow through the different components of the source section, ranging from viscous to free molecular flow. By connecting these simulations with a number of experimentally determined operational parameters, the gas model can be refreshed regularly according to the measured operating conditions. In this work, measurement and modelling uncertainties are quantified with regard to their implications for the neutrino mass measurement. We find that the systematic uncertainties related to the description of gas flow are represented by $\Delta m_{\nu}^2 = (-3.06 \pm 0.24) \cdot 10^{-3}\,\mathrm{eV}^2$, and that the gas model is ready to be used in the analysis of upcoming KATRIN data.
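
    The source section spans regimes from viscous to free molecular flow. As a rough orientation (this helper is illustrative and not taken from the paper), the applicable rarefied-gas-flow model is conventionally selected by the Knudsen number, the ratio of the molecular mean free path to a characteristic length of the geometry:

        # Illustrative sketch (not from the paper): conventional Knudsen-number
        # boundaries for selecting a rarefied-gas-flow model.
        def flow_regime(mean_free_path_m: float, characteristic_length_m: float) -> str:
            kn = mean_free_path_m / characteristic_length_m  # Knudsen number
            if kn < 0.01:
                return "viscous (continuum)"
            if kn < 0.1:
                return "slip flow"
            if kn < 10:
                return "transitional"
            return "free molecular"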

    Camera-based in-process quality measurement of hairpin welding

    Hairpin welding, a technology frequently used in the automotive industry, imposes high quality requirements on the welding process. If a non-functioning stator is detected during final inspection, it can be difficult to trace the defect back to the affected weld. A visual assessment of a cooled weld seam often provides no information about its strength. However, conclusions about the quality of a weld can be drawn from its behavior during welding, especially from spattering. In addition, spatter on the component can have serious consequences. In this paper, we present in-process monitoring of laser-based hairpin welding. Using in-process images analyzed by a neural network, we present a spatter detection method that allows conclusions to be drawn about the quality of the weld. In this way, faults caused by spattering can be detected at an early stage and the affected components sorted out. The implementation is based on a small data set and designed for fast processing on hardware with limited computing power. With a network architecture that uses dilated convolutions, we obtain a large receptive field and can therefore take feature interrelations across the image into account. As a result, we obtain a pixel-wise classifier, which allows us to infer the spatter areas directly on the production line.
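
    As a sketch of the kind of architecture the abstract describes (the exact layer configuration is not given there, so the channel counts and dilation rates below are assumptions), a pixel-wise spatter classifier built from dilated convolutions might look as follows in PyTorch:

        # Minimal sketch, assuming grayscale in-process images and an assumed
        # sequence of dilation rates; dilation enlarges the receptive field
        # without pooling, so the output keeps the input resolution and
        # yields one spatter logit per pixel.
        import torch
        import torch.nn as nn

        class DilatedSpatterNet(nn.Module):
            def __init__(self, channels=16):
                super().__init__()
                layers, in_ch = [], 1  # single-channel grayscale input
                for d in (1, 2, 4, 8):  # exponentially growing receptive field
                    layers += [nn.Conv2d(in_ch, channels, 3, padding=d, dilation=d),
                               nn.BatchNorm2d(channels),
                               nn.ReLU(inplace=True)]
                    in_ch = channels
                self.features = nn.Sequential(*layers)
                self.classifier = nn.Conv2d(channels, 1, 1)  # per-pixel logit

            def forward(self, x):
                return self.classifier(self.features(x))  # (N, 1, H, W)

        model = DilatedSpatterNet()
        logits = model(torch.randn(1, 1, 128, 128))
        spatter_mask = torch.sigmoid(logits) > 0.5  # binary spatter map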

    Towards Global People Detection and Tracking using Multiple Depth Sensors

    A Fixed-Point Quantization Technique for Convolutional Neural Networks Based on Weight Scaling

    Analysis of AI-Based Single-View 3D Reconstruction Methods for an Industrial Application

    Machine learning (ML) is a key technology in smart manufacturing, as it provides insights into complex processes without requiring deep domain expertise. This work deals with deep learning algorithms for determining a 3D reconstruction from a single 2D grayscale image. 3D reconstruction is attractive for quality control because the height values contain relevant information that is not visible in 2D data. Instead of 3D scans, depth maps estimated from a 2D input image can be used, with the advantage of a simple setup and a short recording time. Determining a 3D reconstruction from a single input image is a difficult task for which many algorithms and methods have been proposed over the past decades. In this work, three deep learning methods, namely stacked autoencoders (SAE), generative adversarial networks (GANs) and U-Nets, are investigated, evaluated and compared for 3D reconstruction from a 2D grayscale image of laser-welded components. Different variants of GANs are tested, with the conclusion that Wasserstein GANs (WGANs) are the most robust approach among them. To the best of our knowledge, the present paper is the first to consider the U-Net, which achieves outstanding results in semantic segmentation, in the context of 3D reconstruction tasks. Unlike the U-Net, which uses standard convolutions, the stacked dilated U-Net (SDU-Net) applies stacked dilated convolutions. Of all the 3D reconstruction approaches considered in this work, the SDU-Net shows the best performance, not only in terms of evaluation metrics but also in terms of computation time. Due to the comparatively small number of trainable parameters and the suitability of the architecture for strong data augmentation, a robust model can be generated from only a small amount of training data.
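
    For orientation, the objective that distinguishes Wasserstein GANs from standard GANs can be sketched as follows. The gradient-penalty term shown here is one common way to enforce the Lipschitz constraint the Wasserstein formulation requires; the abstract does not say which variant the authors use, so treat that detail as an assumption:

        # Sketch of a WGAN critic loss with gradient penalty; `critic` is any
        # network scoring depth maps, `real_depth`/`fake_depth` are batches.
        import torch

        def critic_loss(critic, real_depth, fake_depth, gp_weight=10.0):
            # Wasserstein estimate: score real samples high, generated ones low.
            loss = critic(fake_depth).mean() - critic(real_depth).mean()
            # Gradient penalty on random interpolates (1-Lipschitz constraint).
            eps = torch.rand(real_depth.size(0), 1, 1, 1, device=real_depth.device)
            interp = (eps * real_depth + (1 - eps) * fake_depth).requires_grad_(True)
            grad, = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)
            loss += gp_weight * ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
            return loss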

    Verification Witnesses

    Over the last years, witness-based validation of verification results has become an established practice in software verification: an independent validator re-establishes the results of a software verifier using verification witnesses, which are stored in a standardized exchange format. Beyond validation, such exchangeable information about proofs and alarms found by a verifier can be shared across verification tools, and users can apply independent third-party tools to visualize and explore witnesses, helping them comprehend the causes of bugs or the reasons why a given program is correct. To make verification results more accessible to engineers, it is necessary to treat witnesses as first-class exchangeable objects, stored independently from the source code and checked independently from the verifier that produced them, respecting the important principle of separation of concerns. We present the conceptual principles of verification witnesses, describe how to use them, provide a technical specification of the exchange format for witnesses, and perform an extensive experimental study on witness-based result validation, using the validators CPAchecker, UAutomizer, CPA-witness2test, and FShell-witness2test.
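
    As a small illustration of witnesses as first-class, tool-independent objects, a third-party script can walk a witness with nothing but the Python standard library. This sketch assumes the GraphML-based exchange format with key ids matching the attribute names (e.g. witness-type, startline, assumption):

        # Minimal sketch: list the automaton edges of a GraphML witness.
        import xml.etree.ElementTree as ET

        NS = {"g": "http://graphml.graphdrawing.org/xmlns"}

        def read_witness(path):
            graph = ET.parse(path).getroot().find("g:graph", NS)
            meta = {d.get("key"): d.text for d in graph.findall("g:data", NS)}
            print("witness type:", meta.get("witness-type"))
            for edge in graph.findall("g:edge", NS):
                data = {d.get("key"): d.text for d in edge.findall("g:data", NS)}
                print(edge.get("source"), "->", edge.get("target"),
                      "line:", data.get("startline"),
                      "assumption:", data.get("assumption"))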

    Ischemic preconditioning attenuates portal venous plasma concentrations of purines following warm liver ischemia in man

    Background/Aims: Degradation of adenine nucleotides to adenosine has been suggested to play a critical role in ischemic preconditioning (IPC). We therefore asked, in patients undergoing partial hepatectomy, whether (i) IPC increases plasma purine catabolites and whether (ii) the formation of purines in response to vascular clamping (Pringle maneuver) can be attenuated by prior IPC. Methods: 75 patients were randomly assigned to three groups: group I underwent hepatectomy without vascular clamping; group II was subjected to the Pringle maneuver during resection; and group III was preconditioned (10 min ischemia and 10 min reperfusion) prior to the Pringle maneuver for resection. Central venous, portal venous and arterial plasma concentrations of adenosine, inosine, hypoxanthine and xanthine were determined by high-performance liquid chromatography. Results: The duration of the Pringle maneuver did not differ between patients with or without IPC. Surgery without vascular clamping had only a minor effect on plasma purines, which increased transiently. Purine plasma concentrations increased most after the Pringle maneuver alone; this strong rise was significantly attenuated by IPC. When the portal venous minus arterial concentration difference was calculated for inosine or hypoxanthine, the respective differences became positive in patients subjected to the Pringle maneuver, an effect that was completely prevented by preconditioning. Conclusion: These data demonstrate that (i) IPC increases the formation of adenosine, and that (ii) the unwanted degradation of adenine nucleotides to purines caused by the Pringle maneuver can be attenuated by IPC. Because IPC also decreases the portal venous minus arterial purine plasma concentration differences, it might also reduce disturbances of energy metabolism in the intestine. Copyright (C) 2005 S. Karger AG, Basel.

    SMT-based Model Checking for Recursive Programs

    We present an SMT-based symbolic model checking algorithm for safety verification of recursive programs. The algorithm is modular and analyzes procedures individually. Unlike other SMT-based approaches, it maintains both over- and under-approximations of procedure summaries. Under-approximations are used to analyze procedure calls without inlining; over-approximations are used to block infeasible counterexamples and to detect convergence to a proof. We show that for programs and properties over a decidable theory, the algorithm is guaranteed to find a counterexample if one exists. Its efficiency, however, depends on an oracle for quantifier elimination (QE). For Boolean programs, the algorithm is a polynomial decision procedure, matching the worst-case bounds of the best BDD-based algorithms. For linear arithmetic (integers and rationals), we give an efficient instantiation of the algorithm by applying QE lazily: we use existing interpolation techniques to over-approximate QE and introduce Model Based Projection to under-approximate QE. An empirical evaluation on SV-COMP benchmarks shows that our algorithm improves significantly on the state of the art.
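
    The lazy-QE idea can be illustrated with a toy z3py sketch. This is not the paper's algorithm (its Model Based Projection computes a more general disjunct for linear arithmetic, e.g. via virtual substitution); substituting the model's value for the bound variable is merely the simplest sound under-approximation of QE that still covers the given model:

        # Toy contrast between eager QE and a model-based under-approximation.
        from z3 import Real, Exists, And, Tactic, Solver, substitute, sat

        x, y = Real("x"), Real("y")
        phi = And(y < x, x < y + 2)          # body of: exists x. y < x < y + 2

        # Eager QE computes the exact projection (expensive in general).
        print(Tactic("qe")(Exists([x], phi)))

        # Lazy alternative: take one model and substitute x's value in phi.
        s = Solver()
        s.add(phi)
        assert s.check() == sat
        m = s.model()
        proj = substitute(phi, (x, m.eval(x, model_completion=True)))
        print(proj)  # quantifier-free, implies exists x. phi, covers m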